This page is part of the Initiality Project.
Here we define the “raw syntax” for our type theory. These are sets whose elements are called “raw terms” and “raw types”. They are obtained by putting together operations like “$\Pi$-types” and “$\lambda$-abstraction” in a way that “makes sense locally” but not necessarily globally.
In traditional approaches to logic, one often defines the raw terms (or “well-formed formulas”) as a subset of the set of all “strings”, i.e. finite lists of elements of some “alphabet” set such as $\{\Pi, \lambda, (, ), :, ., x, y, z, \dots\}$. Note that parentheses, colons, commas, and so on are all symbols in the alphabet. In addition, this point of view requires us to alter some of the clauses to include extra parentheses, e.g. we have to write $(M\,N)$ for application so that a string like $M\,N\,P$ has an unambiguous meaning.
However, a more modern perspective is to regard this as defining an inductive type (in the metatheory), with each clause as a constructor. In this case, the raw terms themselves are not lists of symbols with some property, but rather trees whose nodes are labeled with appropriate symbols. This way we automatically obtain an induction principle for defining functions by recursion and proving theorems by induction over the set of terms, which in the strings-of-symbols approach one has to prove by hand. The strings of symbols we write on the page, like $\lambda x. M$, are then regarded as just a notation for these “abstract syntax trees”.
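As a concrete (if simplistic) illustration, here is a minimal sketch in Haskell of untyped $\lambda$-terms as an inductive type; the type and constructor names are ours, chosen for this example only. Each grammar clause becomes a constructor, and functions defined by pattern matching over the constructors are exactly structural recursions.

```haskell
-- Raw λ-terms as trees rather than strings.
data Term
  = TVar String       -- a variable node:  x
  | TLam String Term  -- an abstraction:   λx. M
  | TApp Term Term    -- an application:   M N
  deriving (Show)

-- The string "(λx. x) y" is merely notation for this tree:
exampleTerm :: Term
exampleTerm = TApp (TLam "x" (TVar "x")) (TVar "y")
```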
It’s true that to be completely precise about the entire passage from “mathematics written on the page” to its meaning, one would want to also define the strings of symbols and prove that they faithfully represent the syntax trees. This is analogous to how a compiler for a programming language must start out by “parsing” and “lexing” the code written by the programmer (which is a string of symbols) into an internal “abstract syntax tree” representation. However, such things are well-understood and extremely uninteresting, so much so that programmers often use “parser generator” programs to write parsers for them; the interesting and nontrivial mathematics starts once we have an abstract syntax tree.
Experts will know that there are many ways to deal with variable binding in type theory, including “named variables”, “de Bruijn indices”, “de Bruijn levels”, “locally nameless variables”, and so on. We will use “named variables”. The reason is that our goal is expositional and sociological, and named variables keep the syntax as close as possible to what “users” of type theory are familiar with (e.g. with named variables we write $\lambda x. \lambda y. x$, whereas with de Bruijn indices one has to write $\lambda\lambda 2$ instead). Since we are humans writing for humans to read, we don’t want to deal explicitly with de Bruijn indices, and neither do we want to pretend that we are using de Bruijn indices when we aren’t really using them.
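For comparison only, here is a hypothetical de Bruijn representation next to the named `Term` type sketched above, with indices counting enclosing binders starting from 1:

```haskell
-- A de Bruijn representation: variables are indices, binders are anonymous.
data DB = DBVar Int | DBLam DB | DBApp DB DB deriving (Show)

-- λx. λy. x with names...
kNamed :: Term
kNamed = TLam "x" (TLam "y" (TVar "x"))

-- ...and the same term as λλ2: the 2 points past one binder to the outer λ.
kDB :: DB
kDB = DBLam (DBLam (DBVar 2))
```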
Thus, we suppose given an infinite set $V$, whose elements we call variables. Constructivists should assume that $V$ has decidable equality. In addition we assume given an operation which replaces a variable with another one that is “fresh for”, i.e. distinct from all of, a given finite set of variables. That is, we assume given a function $\mathrm{fresh}$ such that $\mathrm{fresh}(x, W) \notin W$ for every $x \in V$ and finite subset $W \subseteq V$. We also assume for simplicity that if already $x \notin W$ then $\mathrm{fresh}(x, W) = x$. We will also write $\mathrm{fresh}(x, W)$ as $x \uparrow W$, and if $\Delta = \{x_1 < x_2 < \cdots < x_n\}$ is a linearly ordered set of variables we will write

$$\Delta \uparrow W = \{\, y_1, \dots, y_n \,\} \qquad\text{where}\qquad y_i = x_i \uparrow \big(W \cup \{y_1, \dots, y_{i-1}\}\big)$$

for the result of freshening each element of $\Delta$ in succession, ensuring that they remain distinct not only from $W$ but also from each other.
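A minimal sketch of one such freshening operation in Haskell (the priming scheme is our choice; any operation with the stated properties would do):

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

type V = String  -- our stand-in for the set of variables

-- fresh x w: prime x until it avoids the finite set w.
fresh :: V -> Set V -> V
fresh x w
  | x `Set.notMember` w = x            -- if x is already fresh, return it unchanged
  | otherwise           = fresh (x ++ "'") w

-- freshens: freshen an ordered list of variables in succession, keeping
-- them distinct both from w and from each other (the operation Δ ↑ W).
freshens :: [V] -> Set V -> [V]
freshens []       _ = []
freshens (x : xs) w = x' : freshens xs (Set.insert x' w)
  where x' = fresh x w
```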
We will also frequently use meta-variables. These are simply ordinary mathematical variables in the “metatheory”, i.e. the ordinary informal mathematics in which we define and prove things about type theory. Single upper-case roman letters such as $M$, $N$, $A$, $B$ will usually be meta-variables ranging over raw terms or raw types. Single lower-case roman letters such as $x$, $y$, $z$ will usually be meta-variables ranging over variables (i.e. elements of $V$).
Overall, we are not attempting in this project to treat “all type theories” at any level of generality, instead simply dealing with individual type formers one by one (though attempting to be as “modular” as possible). However, at the level of raw syntax we can easily be fully general, thereby saving ourselves some additional work in each case. Thus, we will work with a variant of what Practical Foundations for Programming Languages (PFPL) calls abstract binding trees, parametrized by a set of sorts and a set of operators.
For us the set of sorts is $\{\mathrm{tm}, \mathrm{ty}\}$. That is, there are two classes of raw syntactic objects: raw terms and raw types. The only function of the notion of “sort” is to avoid writing exactly the same things twice in the two cases.
The set of operators is a parameter of the theory; the rules for each type former begin by specifying some relevant operators. For instance, the operators associated to $\Pi$-types are $\Pi$, $\lambda$, and $\mathsf{app}$.
Each operator is furthermore associated with a signature or generalized arity, which consists of:

* a sort (for us, $\mathrm{tm}$ or $\mathrm{ty}$), the sort of the whole expression it constructs;
* an argument arity $n$, the number of its arguments, together with a sort $s_i$ for each argument $1 \le i \le n$;
* a binding arity $k$, the number of variables it binds; and
* a scoping relation between $\{1, \dots, k\}$ and $\{1, \dots, n\}$, specifying which arguments each bound variable is in scope over.
Often (e.g. in PFPL) the scoping relation is a function from $\{1, \dots, k\}$ to $\{1, \dots, n\}$, i.e. each binder has exactly one subterm in its scope. However, it is convenient to be more general; e.g. in a fully-annotated $\lambda$-abstraction $\lambda(x{:}A).(M{:}B)$ it is natural to consider $B$ and $M$ to be both in the scope of the variable $x$, rather than requiring this to be desugared to a form such as $\lambda(A,\, x.B,\, y.M)$ in which potentially-different variables are bound in $B$ and $M$. Thus, we will represent $\lambda$ as an operator with 3 arguments, two of sort $\mathrm{ty}$ and one of sort $\mathrm{tm}$, with binding arity 1, and scoping relation $\{(1,2), (1,3)\}$.
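The following Haskell sketch records such signatures as a small data type (the field names and encoding are ours); the scoping relation is a list of pairs $(i, j)$ meaning that binder $i$ scopes over argument $j$:

```haskell
data Sort = Tm | Ty deriving (Eq, Show)

-- A signature / generalized arity for an operator.
data Signature = Signature
  { opSort    :: Sort          -- the sort of the whole operator expression
  , argSorts  :: [Sort]        -- the argument arity: one sort per argument
  , bindArity :: Int           -- the binding arity: how many variables it binds
  , scoping   :: [(Int, Int)]  -- (i, j): binder i scopes over argument j
  } deriving (Show)

-- The fully annotated λ just described: arguments (A : ty, B : ty, M : tm),
-- one binder, scoping relation {(1,2), (1,3)}.
lamSig :: Signature
lamSig = Signature Tm [Ty, Ty, Tm] 1 [(1, 2), (1, 3)]
```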
Fully general abstract binding trees also allow each binder to be associated to a sort. However, in our type theories all variables will range only over terms (not types), so we omit this datum and regard every bound variable as implicitly having sort $\mathrm{tm}$.
Finally, in addition to the operators arising from type formers, we assume a collection of constants, each of which is an operator with binding arity $0$ and all arguments having sort $\mathrm{tm}$.
We will usually use single upper-case typewriter-font letters like $\mathtt{S}$ for meta-variables ranging over operators, including constants.
Our raw syntax, though untyped, will be strongly scoped. By this we mean that each raw term or type is indexed by a finite set of variables that may occur freely in it. To be precise, we define inductively a family of sets $\mathrm{Raw}_s(W)$, indexed by a sort $s \in \{\mathrm{tm}, \mathrm{ty}\}$ and a finite subset $W \subseteq V$; we call their elements raw terms/types with variables from $W$. Since we are working in such generality, there are only two constructors of this inductive family:

* For any $W$ and any $x \in W$, we have $x \in \mathrm{Raw}_{\mathrm{tm}}(W)$.
* Suppose $\mathtt{S}$ is an operator of sort $s$, argument arity $n$, and binding arity $k$. Suppose also $\Delta \subseteq V$, with $|\Delta| = k$, $\Delta \cap W = \emptyset$, and $\Delta$ totally ordered. Given $1 \le i \le n$, write $\Delta_i \subseteq \Delta$ for the variables bound in argument $i$ (those related to $i$ by the scoping relation of $\mathtt{S}$), and suppose furthermore that for each $i$ we have $M_i \in \mathrm{Raw}_{s_i}(W \cup \Delta_i)$, where $s_i$ gives the argument sorts of $\mathtt{S}$. Then we have an element $\mathtt{S}(\Delta.\, M_1, \dots, M_n) \in \mathrm{Raw}_s(W)$.
This can be read as saying that each set $\mathrm{Raw}_s(W)$ consists of well-founded rooted trees containing nodes of two kinds:

* Nodes labeled by a variable (an element of $V$) that is either in $W$ or is bound at some ancestor of that node and scopes over that node. Such a node has no children and is of sort $\mathrm{tm}$.
* Nodes labeled by an operator and a list of variables, of length its binding arity, that are not in $W$ or bound at some ancestor of that node. Such a node has a number of children equal to the argument arity of its operator label, of the appropriate sorts, and each of its variables scopes over the subtrees of those children specified by the scoping relation of its operator label, while its sort is that of its operator label.
The above clauses also introduce notation for these trees: we will write simply “$x$” for the tree consisting of a single variable node with associated variable $x$, and $\mathtt{S}(\Delta.\, M_1, \dots, M_n)$ for the tree whose root is an operator node with associated operator $\mathtt{S}$, bound variables $\Delta$, and subtrees $M_1, \dots, M_n$. (These are the “linear” syntaxes that have to be “parsed” into an abstract syntax tree if implementing type theory on a computer.) For instance, a fully annotated abstraction $\lambda(x{:}A).(M{:}B)$ is desugared as $\lambda(x.\, A, B, M)$, a node labeled by $\lambda$ and with three subtrees $A$, $B$, and $M$, two of which ($B$ and $M$) are in the scope of $x$.
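As a sketch of how this inductive family might be represented concretely, the Haskell fragment below (building on the `Signature` type above; all names ours) stores trees with explicit bound-variable lists, and checks strong scoping against a declared free-variable set `w`:

```haskell
-- An operator node carries its operator name, its ordered list of bound
-- variables Δ, and its subtrees.
data Raw = Var V | Op String [V] [Raw] deriving (Show)

-- The subset Δ_i of Δ bound in argument i, per the scoping relation
-- (assuming the relation only mentions binders 1..k).
deltaI :: Signature -> [V] -> Int -> Set V
deltaI sg delta i = Set.fromList [ delta !! (b - 1) | (b, j) <- scoping sg, j == i ]

-- Check that a tree of sort s is strongly scoped over w, given a lookup
-- function for operator signatures.
scopeCheck :: (String -> Maybe Signature) -> Set V -> Sort -> Raw -> Bool
scopeCheck _    w s (Var x) = s == Tm && x `Set.member` w
scopeCheck sigs w s (Op op delta args) =
  case sigs op of
    Nothing -> False
    Just sg ->
      s == opSort sg
        && length delta == bindArity sg
        && length args  == length (argSorts sg)
        && all (`Set.notMember` w) delta  -- the "local Barendregt convention" below
        && and [ scopeCheck sigs (w `Set.union` deltaI sg delta i) si a
               | (i, si, a) <- zip3 [1 ..] (argSorts sg) args ]
```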
The requirement that $\Delta \cap W = \emptyset$ amounts to a “local Barendregt convention” that the names of bound variables can never clash with (or “shadow”) any free variables, where the set of “free variables” is declared for each raw term or type (the set $W$) rather than extracted by syntactic analysis. Maintaining this convention requires “freshening” when we weaken by adding new unused free variables; see below.
For each type former, we describe its binding arity, argument arity and sorts, and scoping relation in a hopefully more readable form, by giving arbitrary names to its bound variables and arguments. We also give the “sugared” or “traditional” syntax that we will usually use for it.
For instance, the $\lambda$ row below means that $\lambda$ is an operator of sort $\mathrm{tm}$, it has two arguments of sort $\mathrm{ty}$ and one of sort $\mathrm{tm}$, and one binder that scopes over the second type argument and the term argument.
| operator | sort | vars | type args | term args | scoping | sugared syntax |
|---|---|---|---|---|---|---|
| $\Pi$ | $\mathrm{ty}$ | $x$ | $A$, $B$ | | $x.B$ | $\Pi(x{:}A).\, B$ |
| $\lambda$ | $\mathrm{tm}$ | $x$ | $A$, $B$ | $M$ | $x.B$, $x.M$ | $\lambda(x{:}A).(M{:}B)$ |
| $\mathsf{app}$ | $\mathrm{tm}$ | $x$ | $A$, $B$ | $M$, $N$ | $x.B$ | $\mathsf{app}_{(x:A).B}(M, N)$ |
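Rendered with the `Signature` type sketched earlier (operator names and encoding ours), this table becomes:

```haskell
-- The Π-type operators from the table above, as signature declarations.
piOperators :: [(String, Signature)]
piOperators =
  [ ("Pi",  Signature Ty [Ty, Ty]         1 [(1, 2)])          -- Π(x:A). B
  , ("lam", Signature Tm [Ty, Ty, Tm]     1 [(1, 2), (1, 3)])  -- λ(x:A). (M:B)
  , ("app", Signature Tm [Ty, Ty, Tm, Tm] 1 [(1, 2)])          -- app_{(x:A).B}(M, N)
  ]
```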
to be added later…
The principle of structural induction on raw terms and types says that to prove something about all of them, it suffices to consider raw terms and types constructed according to each of the above clauses (i.e. consider all possible labels for the root of an abstract syntax tree), assuming as an inductive hypothesis that the desired statement holds of all subtrees. This is a basic property of an inductive type; if our metatheory is set-theoretic then it can be proven from the definition of well-founded tree.
Similarly, there is a structural recursion principle for defining pairs of functions with domains $\mathrm{Raw}_{\mathrm{tm}}(W)$ and $\mathrm{Raw}_{\mathrm{ty}}(W)$: to define such a function it suffices to say, for each inductive clause, how to construct the value of the function on a tree with that root, assuming given its values on all the subtrees.
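For instance, on the `Raw` trees sketched above, the number of nodes is defined by structural recursion, giving its value at each constructor in terms of its values on all subtrees:

```haskell
-- Structural recursion: one clause per constructor.
size :: Raw -> Int
size (Var _)       = 1
size (Op _ _ args) = 1 + sum (map size args)
```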
We want to identify raw terms and types that differ only according to a renaming of bound variables. However, in defining this inductively, we end up having to also rename free variables in the terms where the variables are bound. We do this all together by defining a relation $M \sim_\sigma M'$, where $M \in \mathrm{Raw}_s(W)$, $M' \in \mathrm{Raw}_s(W')$, and $\sigma$ is a bijection $W \cong W'$, by recursion on $M$ and $M'$ as follows:

* If $x \in W$ and $x' \in W'$, then $x \sim_\sigma x'$ if and only if $\sigma(x) = x'$.
* For ordered sets $\Delta$ and $\Delta'$ of the same cardinality, let $\langle\Delta \cong \Delta'\rangle$ be the unique order-preserving bijection between them, and with $\mathtt{S}(\Delta.\, M_1, \dots, M_n)$ and $\mathtt{S}(\Delta'.\, M'_1, \dots, M'_n)$ as before we have similarly $\Delta_i$ and $\Delta'_i$. Then $\mathtt{S}(\Delta.\, M_1, \dots, M_n) \sim_\sigma \mathtt{S}(\Delta'.\, M'_1, \dots, M'_n)$ if and only if $M_i \sim_{\sigma \cup \langle\Delta_i \cong \Delta'_i\rangle} M'_i$ for all $i$.
* In all other cases (e.g. one of $M$ and $M'$ is a variable and the other is an operator, or they are labeled by distinct operators), $M \sim_\sigma M'$ is false.
It is easy to prove by induction that:

* $M \sim_{1_W} M$, where $1_W$ is the identity bijection of $W$;
* if $M \sim_\sigma M'$ then $M' \sim_{\sigma^{-1}} M$; and
* if $M \sim_\sigma M'$ and $M' \sim_\tau M''$ then $M \sim_{\tau \circ \sigma} M''$.

In particular, each $\sim_{1_W}$ is an equivalence relation on $\mathrm{Raw}_s(W)$, which we call $\alpha$-equivalence.
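A sketch of this relation on the `Raw` trees from before, with $\sigma$ given as a finite map (the graph of a bijection $W \cong W'$). For brevity we extend $\sigma$ at a binder by all of $\langle\Delta \cong \Delta'\rangle$ at once rather than per argument; under strong scoping only the variables of $\Delta_i$ can occur in $M_i$, so the two agree:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

alphaEq :: Map V V -> Raw -> Raw -> Bool
alphaEq sigma (Var x) (Var x') = Map.lookup x sigma == Just x'
alphaEq sigma (Op op delta args) (Op op' delta' args')
  | op == op' && length delta == length delta' && length args == length args'
  -- zip pairs the ordered bound-variable lists: the bijection ⟨Δ ≅ Δ'⟩.
  -- Map.union is left-biased, so the binder pairing extends σ.
  = and (zipWith (alphaEq (Map.union (Map.fromList (zip delta delta')) sigma))
                 args args')
alphaEq _ _ _ = False  -- a variable against an operator, or distinct operators
```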
For any function $\sigma : W \to W'$ between finite subsets of $V$, we define a function $\mathrm{Raw}_s(W) \to \mathrm{Raw}_s(W')$, written $M \mapsto M[\sigma]$, by induction on $M$. When $\sigma$ is a subset inclusion, this is “weakening”: adding a new unused free variable. But to define weakening inductively, we will need to involve $M[\sigma]$ when $\sigma$ is an injection that is not a subset inclusion, and it is no extra trouble to define it for any function at all.

* If $M = x \in W$, we define $x[\sigma] = \sigma(x)$.
* If $M = \mathtt{S}(\Delta.\, M_1, \dots, M_n)$, we would like to define $M[\sigma]$ to be “$\mathtt{S}(\Delta.\, M_1[\sigma \cup 1_{\Delta_1}], \dots, M_n[\sigma \cup 1_{\Delta_n}])$” where $1_{\Delta_i}$ is the identity on $\Delta_i$. But this is not allowed by our definitions, since $\Delta$ may not be disjoint from $W'$ (though it is disjoint from $W$). Thus, recall that we have the successive freshening $\Delta \uparrow W'$ that is disjoint from $W'$ and an order-preserving bijection $\langle\Delta \cong \Delta \uparrow W'\rangle$. Let $\tau_i$ be the restriction of $\langle\Delta \cong \Delta \uparrow W'\rangle$ to the variables $\Delta_i$ bound in argument $i$; then recursively we have $M_i[\sigma \cup \tau_i] \in \mathrm{Raw}_{s_i}(W' \cup \tau_i(\Delta_i))$. If we write $M'_i = M_i[\sigma \cup \tau_i]$, then we can define $M[\sigma] = \mathtt{S}\big((\Delta \uparrow W').\, M'_1, \dots, M'_n\big)$.
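A sketch of $M[\sigma]$ on our `Raw` trees, using the `freshens` operation from earlier. (Again for brevity each argument is scoped over all of $\Delta \uparrow W'$ rather than only the part bound in that argument; this freshens a little more aggressively than the definition above, with the same result up to $\alpha$-equivalence.)

```haskell
-- rename sigma w' m: apply σ to the free variables of m, freshening bound
-- variables away from the target free-variable set w' and extending σ.
rename :: Map V V -> Set V -> Raw -> Raw
rename sigma _  (Var x) = Var (Map.findWithDefault x x sigma)
rename sigma w' (Op op delta args) =
  Op op delta' (map (rename sigma' w'') args)
  where
    delta' = freshens delta w'  -- Δ ↑ W'
    sigma' = Map.union (Map.fromList (zip delta delta')) sigma
    w''    = w' `Set.union` Set.fromList delta'
```

Weakening along a subset inclusion $W \subseteq W'$ is then `rename Map.empty w'`, since unmapped free variables are left unchanged.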
It is easy to prove by induction that if $\sigma : W \cong W'$ and $\sigma' : U \cong U'$ are bijections and $\tau : W \to U$ and $\tau' : W' \to U'$ satisfy $\sigma' \circ \tau = \tau' \circ \sigma$, then $M \sim_\sigma M'$ implies $M[\tau] \sim_{\sigma'} M'[\tau']$. That is, the structural rules respect $\alpha$-equivalence.
Let $M \in \mathrm{Raw}_s(W \cup \{x\})$ and $N \in \mathrm{Raw}_{\mathrm{tm}}(W)$ where $x \notin W$; we want to define $M[N/x] \in \mathrm{Raw}_s(W)$ by induction on $M$.

* If $M = x$, then $M[N/x] = N$; this is well-typed since in this case $s = \mathrm{tm}$.
* If $M = y$ for some variable $y \neq x$, then $M[N/x] = y$.
* If $M = \mathtt{S}(\Delta.\, M_1, \dots, M_n)$, we want to recursively substitute $N$ for $x$ in each $M_i$. But recall that $M_i \in \mathrm{Raw}_{s_i}(W \cup \{x\} \cup \Delta_i)$, so in order to write “$M_i[N/x]$” we need $N \in \mathrm{Raw}_{\mathrm{tm}}(W \cup \Delta_i)$, whereas we are given only $N \in \mathrm{Raw}_{\mathrm{tm}}(W)$. Thus, we define $N_i = N[\iota_i]$, where $\iota_i : W \hookrightarrow W \cup \Delta_i$ is the obvious inclusion. Then we can define $M_i[N_i/x]$ and hence $M[N/x] = \mathtt{S}(\Delta.\, M_1[N_1/x], \dots, M_n[N_n/x])$.
Note that substitution is automatically “capture-avoiding” because of our “strong scoping” and local Barendregt convention. Before we can write $M[N/x]$, strong scoping requires $M$ and $N$ to have the same free variables (except for $x$, which only $M$ has), and no bound variables in $M$ can coincide with these free variables; thus no free variables in $N$ can get “captured” when it is substituted under binders in $M$. If $M$ and $N$ are not “given” with the same free variables, we need to explicitly weaken them first in order to substitute, and the definition of weakening was forced by strong scoping to rename all bound variables to keep them distinct from the newly added free ones; in particular, the bound variables in $M$ must get renamed to differ from any free variables in $N$. Finally, the definition of substitution is also forced by strong scoping to rename bound variables in $N$ (using the weakening $N[\iota_i]$) to keep them distinct from any bound variables in $M$ under which they are substituted, ensuring that $M[N/x]$ still satisfies the Barendregt convention.
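A sketch of $M[N/x]$ on our `Raw` trees, with `rename` from above playing the role of the weakening $N[\iota_i]$ (and, as before, all of $\Delta$ used in place of each $\Delta_i$ for brevity):

```haskell
-- subst w n x m: substitute n for the free variable x in m, where w is
-- the ambient free-variable set W (so x ∉ w, and n has free variables in w).
subst :: Set V -> Raw -> V -> Raw -> Raw
subst _ n x (Var y)
  | y == x    = n
  | otherwise = Var y
subst w n x (Op op delta args) =
  Op op delta (map (subst wI nI x) args)
  where
    wI = w `Set.union` Set.fromList delta
    nI = rename Map.empty wI n  -- weaken n into W ∪ Δ, freshening its binders
```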